Current Issue: July - September | Volume: 2015 | Issue Number: 3 | Articles: 5
In several application contexts in the multimedia field (educational, extreme gaming), interaction with the user requires that the system be able to render music in an expressive way. Expressiveness is the added value of a performance and is part of the reason that music is interesting to listen to. Understanding and modeling expressive content communication is important for many engineering applications in information technology (e.g., Music Information Retrieval, as well as several applications in the affective computing field). In this paper, we present an original approach to modify the expressive content of a performance in a gradual way, applying a smooth morphing among performances with different expressive content in order to adapt the audio expressive character to the user's desires. The system won the final stage of Rencon 2011, a performance RENdering CONtest, a research project that organizes contests for computer systems generating expressive musical performances....
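As a rough illustration of the kind of gradual morphing described above, the following Python sketch linearly blends two sets of expressive performance parameters under a morphing weight. The parameter names (tempo_scale, velocity, articulation) and the linear blend are illustrative assumptions, not the method actually used in the paper.

    # Minimal sketch of gradual expressive morphing between two renderings.
    # Parameter names and the linear blend are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class ExpressiveParams:
        tempo_scale: float    # relative tempo deviation per note
        velocity: float       # MIDI-like loudness, 0-127
        articulation: float   # ratio of sounded duration to nominal duration

    def morph(a: ExpressiveParams, b: ExpressiveParams, w: float) -> ExpressiveParams:
        """Blend two expressive renderings; w=0 gives style a, w=1 gives style b."""
        w = min(max(w, 0.0), 1.0)
        lerp = lambda x, y: (1.0 - w) * x + w * y
        return ExpressiveParams(
            tempo_scale=lerp(a.tempo_scale, b.tempo_scale),
            velocity=lerp(a.velocity, b.velocity),
            articulation=lerp(a.articulation, b.articulation),
        )

    # Example: move smoothly from a "calm" to a "bright" rendering of one note.
    calm = ExpressiveParams(tempo_scale=0.95, velocity=60, articulation=0.9)
    bright = ExpressiveParams(tempo_scale=1.05, velocity=100, articulation=0.6)
    for step in range(5):
        print(morph(calm, bright, step / 4))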
Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to convey the latest technology. Hand gestures have become one of the most important and attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using the average background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a small recognition time under different light changes, scales, rotations, and backgrounds....
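The background-removal step mentioned in the abstract can be pictured with a minimal running-average background model. The Python sketch below uses plain NumPy on grayscale frames; the learning rate and threshold values are assumptions rather than the paper's settings.

    # Minimal sketch of a running-average background model for foreground (hand) masking.
    # The learning rate and threshold are illustrative assumptions.
    import numpy as np

    class AverageBackground:
        def __init__(self, alpha: float = 0.02, threshold: float = 25.0):
            self.alpha = alpha          # learning rate of the running average
            self.threshold = threshold  # per-pixel foreground threshold
            self.background = None      # running-average background estimate

        def apply(self, frame_gray: np.ndarray) -> np.ndarray:
            """Return a boolean foreground mask (True where motion/hand is detected)."""
            frame = frame_gray.astype(np.float32)
            if self.background is None:
                self.background = frame.copy()
            # Pixels that differ strongly from the accumulated background are foreground.
            mask = np.abs(frame - self.background) > self.threshold
            # Slowly fold the current frame into the background estimate.
            self.background = (1 - self.alpha) * self.background + self.alpha * frame
            return mask

    # Usage with a synthetic frame stream: a bright "hand" region moving to the right.
    bg = AverageBackground()
    for t in range(10):
        frame = np.zeros((120, 160), dtype=np.uint8)
        frame[40:80, 60 + t:100 + t] = 200
        fg_mask = bg.apply(frame)

In the system described by the abstract, the resulting foreground region would then be matched against stored gesture templates (the 1$ step) and mapped to robot commands.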
Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes involved in spatial query, which will in turn inform the detailed design of better user interfaces in mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model, we try to describe the process of user operation and interface adaptation clearly through the dynamic interaction between users and the interface. We then show how the model handles users' demands in a complicated environment and demonstrate its feasibility through experimental results....
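The three sections named in the abstract (purpose, adjustment, adaptation) can be loosely pictured with a toy adaptation loop such as the Python sketch below; every field name and rule in it is an assumption for illustration, not the authors' model.

    # Illustrative sketch only: a toy adaptation loop loosely following the three
    # sections named in the abstract. All fields and rules are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Context:
        screen_width_px: int
        moving: bool          # is the user walking or driving?
        query: str            # the spatial information the user is after

    def adapt_interface(ctx: Context) -> dict:
        # Purpose: infer what the user is trying to locate.
        purpose = "navigate" if "route" in ctx.query else "browse"
        # Adjustment: tune presentation to device and mobility constraints.
        font_size = 18 if ctx.moving else 12
        results_per_page = 3 if ctx.screen_width_px < 480 else 10
        # Adaptation: return the interface configuration for this interaction.
        return {"purpose": purpose, "font_size": font_size, "results": results_per_page}

    print(adapt_interface(Context(screen_width_px=360, moving=True, query="route to station")))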
Interventional radiology procedures require extensive cognitive processing from the physician. Some of these cognitive functions are intended to be replaced by technology in order to reduce the cognitive load. However, limited knowledge is available regarding mental processes in interventional radiology. This research focuses on identifying mental model-related processes, in particular during percutaneous procedures, useful for improving image guidance during interventions. Ethnographic studies and a prototype-based study were conducted in order to perform a task analysis and to identify working strategies and cognitive processes. Data were compared to theories from visual imagery. The results indicate a high level of complexity of mental model construction and manipulation, in particular when mentally comparing mental model knowledge with radiology images on screen (e.g., to steer a needle correctly). Regarding current interface support, the interpretation and selection of oblique views is the most difficult. New interface principles are needed to bring cognitive demands within a reasonable human range, and accompanying cognitive work strategies should also be developed....
Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches as cognitive models in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules embodied in cognitive architecture that will detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort for enhancing the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to perform multiple tasks, and monitored the performance of infotainment related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interface design, supported by increased task complexity to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users....
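One way to picture a virtual user model driven by cognitive parameter estimates is the Python sketch below, in which a couple of hypothetical parameters (reaction time, memory-slip probability) shape simulated task time and errors; these parameters and the error model are illustrative assumptions, not the modules or estimates used in the study.

    # Illustrative sketch only: a virtual user whose simulated task performance is
    # driven by two hypothetical cognitive parameters. Not the study's modules.
    import random
    from dataclasses import dataclass

    @dataclass
    class CognitiveProfile:
        reaction_time_s: float   # base time to respond to one interface step
        memory_slip_prob: float  # chance of forgetting the next step (higher for MCI)

    def simulate_task(profile: CognitiveProfile, n_steps: int, rng: random.Random) -> dict:
        """Simulate one infotainment task; return predicted completion time and errors."""
        time_s, errors = 0.0, 0
        for _ in range(n_steps):
            time_s += profile.reaction_time_s
            if rng.random() < profile.memory_slip_prob:
                errors += 1
                time_s += 2.0 * profile.reaction_time_s  # recovery after a slip
        return {"time_s": round(time_s, 2), "errors": errors}

    rng = random.Random(0)
    healthy = CognitiveProfile(reaction_time_s=0.8, memory_slip_prob=0.02)
    mci = CognitiveProfile(reaction_time_s=1.3, memory_slip_prob=0.15)
    print("healthy:", simulate_task(healthy, 12, rng))
    print("MCI:    ", simulate_task(mci, 12, rng))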